Contactless Cookbook

Created by
Alex LoCicero (adl96) and Yingjie Zhao (yz483)


Introduction

Motivation

We began our design process with empathy fieldwork: observations, engagements, and immersions that we hoped would reveal unaddressed needs and pain points. These needs and pain points, in turn, would serve as the motivation for our product and guide the rest of the design process. Eventually, our empathy fieldwork introduced us to an interesting pain point that arises when college students cook meals. We noticed that our roommates and friends often use personal devices to look up recipes or stream TV while cooking. Many students find that using a personal device is difficult because their hands become wet or dirty while cooking; as a result, they end up repeatedly washing and drying their hands in order to touch their phone screen to scroll through a recipe, start the next TV episode, or adjust their device’s position as they move around the kitchen. This process is annoying and a significant pain point. Furthermore, due to the pandemic, more and more college students are cooking from home, so this pain point is also timely and a great one to address.

Objective

device

Our objective is to create a completely contactless, stripped-down version of Amazon’s Alexa that college students can afford. Our device allows the student to navigate through recipes and videos using voice commands and adjust the screen’s position using hand motions. As the student walks from the cutting board to the stove top, they won’t have to touch their personal device with their greasy hands in order to see the next cooking instruction.

The device is built around four major components:
  • Digital Cookbook Database
  • User Interface (UI)
  • Motion Controlled Display Mount
  • Voice Recognition for UI Navigation


Mechanical Design

Overview

Here we describe the structural components of the mechanical system as well as the actuator. The purpose of structural components is to keep the display up and out of the way to save kitchen counter space. The purpose of the actuator is to keep the display within view as the user moves around the kitchen. Below, in Figure 1, the components of the mechanical design are annotated.

CAD render

Figure 1: CAD Rendering of Mechanical Design

Structure


lamp

The structural design of our device is composed of an aluminum base and stem as well as a 3D printed cap. Several iterations converged to this design choice, and each of the concepts generated is based on a desk lamp we had on hand (shown on the right). The desk lamp provides several useful components, including reconfigurable linkages and a heavy base that can support the rotating display by keeping the center of mass of the system close to the table surface and near the center of the support structure. Sketches of the three main concepts considered are shown in Figure 2. Originally, a 2 DOF robotic arm (shoulder and wrist joints) was considered. However, once the display was selected, we concluded that the large moment produced by a 350 g screen would require a robust structure with expensive motors. Furthermore, a robotic arm seemed to be an overly complex design choice that offered little advantage to the user. Concept 1 ultimately created unnecessary complexity and cost, and we decided a simpler, inexpensive design would be preferable. The second consideration was a crane design, Concept 2. Rather than a robotic arm with two joints acting against large moments, a crane design, with only one joint, would require a motor solely to overcome the inertia and friction of the extended arm. Ultimately, we converged to Concept 3, which essentially replaces the long horizontal beam with a curved, off-the-shelf dashboard mount that is easier to attach to the rotating element (it attaches with a suction cup).


Figure 2: Sketches of main concepts generated during iterative ideation


Note the decreasing complexity as we converge to the final solution, Concept 3. The aluminum base and stem in this final design are scrap parts from that desk lamp. During assembly and testing, these parts proved to be a sturdy foundation to counter the moment of the display and dashboard mount. The third and final component of the structural design is a 3D printed cap which is fastened to the stem and simultaneously supports the turntable and mounts the servo. The cap experiences significant loading from the display/mount moment, so an FEA study was conducted to ensure (1) the Von Mises stresses are below the yield stress of PLA, 50 MPa, and (2) the maximum displacement is a reasonable value. The FEA study results are displayed in Figure 3. A safety factor of 1.5 is used: if the cap fractures, the expensive display is likely to be damaged, and a small dynamic load is expected when the user uses the touch screen option. Since no injury is expected from the cap fracturing, a larger safety factor of 2 was deemed excessive. Refer to Figure 4 for drawings of the cap component.

displacement.png
Von Mises.png

Figure 3: FEA of 3D printed Cap (PLA)

As expected, the maximum displacement occurs at the front of the cap. A 6 mm deflection is acceptable since the display can be adjusted to maintain the desired angle. The position and magnitude of the maximum displacement are expected to vary slightly as the display rotates in the horizontal plane. Also as expected, the maximum stress occurs at the front edge where the platform supporting the turntable meets the sheath that connects the cap to the stem. We made sure to place a gusset plate support between these two faces. The Von Mises stress distribution indicates a maximum stress of 1.4 MPa, which is well below the yield strength of PLA, 50 MPa.

Cap1.png
Cap2.png

Figure 4: Cap drawings. All measurements are in inches and the material is generic PLA

Actuation


To rotate the screen so that it can face any part of the kitchen, we constructed a turntable. The turntable is essentially a ball bearing composed of two 3D printed plates and eight ¼” diameter steel balls. A ball bearing design is used to minimize friction in the system, especially friction at the front of the cap where there is significant loading from the display. The bottom plate serves as a channel for the balls to roll smoothly along during rotation; this plate is fixed to the cap with four screws and heat-set inserts. The top plate constrains the balls at equal angular distances from one another during rotation; this plate is fastened to the servo horn with four machine screws. As expected, the unequal loading on the turntable due to the display’s position results in a slight lift at the back of the turntable, as shown in Figure 5. The depth of the ball constraints was made large enough to account for this lift. Drawings of the plates, including the channel and the table, can be found in Figure 6.


lifted back

Figure 5: Lifted back of turntable due to moment caused by display and mount

Channel.png
Table.png

Figure 6: Turntable drawings. All measurements are in inches and the material is generic PLA

An MG995 standard servo is used as the actuator that rotates the table. Although the ball bearings reduce the friction of the turntable, there is still significant inertia that the servo needs to overcome. The MG995 standard servo offers a maximum torque of 13 kgf-cm and is recommended for an operating torque of 10 kgf-cm, or 0.98 N-m. We cannot theoretically calculate the torque required to overcome the friction of the system because the magnitude of the friction force is not known; however, because the MG995 servo is used in applications such as RC cars, we expect it to provide more than sufficient torque. It is possible to approximate the maximum angular acceleration that this motor can achieve given the system's moment of inertia. A conservative estimate of the maximum allowable angular acceleration at the maximum operating torque was calculated as follows:


[Theoretical Maximum Angular Acceleration]
τ = I·α
m_total = m_turntable + m_mount + m_display = 100 g + 250 g + 300 g = 650 g
I = 0.5 · m_total · R² · SF = 0.5 · (0.650 kg) · (12 cm)² · 1.5 = 70.2 kg·cm² = 0.00702 kg·m²
I·α < 0.98 N·m
α < 140 rad/s²
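
As a quick sanity check, the same estimate can be reproduced in a few lines of Python (all values are taken directly from the calculation above):

# Sanity check of the maximum allowable angular acceleration (values from above)
m_total = 0.100 + 0.250 + 0.300      # turntable + mount + display masses [kg]
radius = 0.12                        # lever arm of the display/mount [m]
safety_factor = 1.5

inertia = 0.5 * m_total * radius**2 * safety_factor   # solid-disk approximation [kg*m^2]
torque_max = 0.98                                     # recommended operating torque [N*m]
alpha_max = torque_max / inertia                      # maximum angular acceleration [rad/s^2]

print(f"I = {inertia:.5f} kg*m^2, alpha_max = {alpha_max:.0f} rad/s^2")
# -> I = 0.00702 kg*m^2, alpha_max = 140 rad/s^2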


servos

With a safety factor of 1.5, we conclude that the maximum allowable angular acceleration of the servo is as high as 140 rad/s². This is far greater than any acceleration the system requires; thus, the MG995 standard servo has more than sufficient torque to overcome the inertia of the system. During testing, the MG995 servo proved to have sufficient torque to rotate the display; unfortunately, after a month of testing, a single plastic gear in the gear train wore completely down, which rendered the servo useless. We had incorrectly assumed that the gear train was entirely metal rather than partially metal, so we were not anticipating gear wear. At the end of the Motion Control section, we discuss this problem and our solution more fully.

In addition to the torque requirement, we consider the maximum rotation requirement. We reduce the maximum rotation design requirement by assuming that the display only needs to rotate 120 degrees to face any part of the kitchen. College students typically have small kitchens and rarely have island countertops, so this is a fair assumption. According to its specifications, the MG995 standard servo can rotate 180 degrees. We found that this maximum angle is closer to 110 degrees in reality; fortunately, this rotation is still reasonable in practice. Further details regarding the actuator are discussed in the Motion Control section.


Motion Control

Overview

With a robust mechanical design, we begin addressing the “contactless” requirement of our device. Before designing the motion control system, we recorded several observations of our friends as they moved around a kitchen. We witnessed our friends go through the annoying, repetitive process of having to wash and dry their hands every time they wished to reorient their screen, swipe through a recipe, or click through a video. We want Contactless Cookbook to address this problem. With motion control, specifically, we would like to help them avoid having to wash and dry their hands every time they wish to reorient the display.

System

Here we describe the motion control system. Motion control is implemented via a simple system. Two ultrasonic sensors take turns recording the time between the trigger and echo signals. This time difference is converted into a distance via the formula given by the datasheet. Once both distances are calculated, the script checks each distance to determine if the user's hand is within 8 cm of one (or both) of the sensors. If a sensor records a distance of less than 8 cm, the corresponding direction is triggered and the servo value is incremented or decremented by 0.05. Note that there is a software stop that prevents the script from commanding the servo to rotate past its extremes (which would result in an error). Calibration of this process is discussed later in the report.

Initially, motion control was implemented in an independent script, motion_control.py, which would run in the background at startup. However, when we implemented voice recognition and the UI, we discovered that the continuous polling of the ultrasonic sensors required too much computing power. We attempted to use multiprocessing, where a single core would be reserved for motion_control.py, but the reassigning of GPIOs in voice_control.py caused motion_control.py to crash. Our solution is to combine motion_control.py with voice_control.py. Essentially, when the user speaks "motion", voice_control.py ceases recording audio and reserves 10 seconds for the motion control logic. Thus, motion control and voice control are never active at the same time. We believe this is an acceptable solution since the user is unlikely to move the screen and navigate the UI simultaneously. Refer to the schematic below for a flow diagram illustrating the motion control system.


motion control flow



Figure 7: Flow diagram for motion control
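
The core of this logic is small. The sketch below is condensed from the combined script in the Code Appendix; the measure_distance helper is our shorthand for the trigger/echo timing described above.

# Simplified sketch of the motion control logic (condensed from the Code Appendix)
import time
import RPi.GPIO as GPIO
from gpiozero import Servo

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)     # left sensor trigger
GPIO.setup(17, GPIO.IN)     # left sensor echo
GPIO.setup(23, GPIO.OUT)    # right sensor trigger
GPIO.setup(24, GPIO.IN)     # right sensor echo

servo = Servo(13)           # servo signal on GPIO 13
val = 0                     # current servo command, kept within roughly -0.95..0.95

def measure_distance(trig, echo):
    # Fire a 10 microsecond trigger pulse, time the echo, and convert to cm (datasheet formula)
    GPIO.output(trig, GPIO.HIGH)
    time.sleep(0.00001)
    GPIO.output(trig, GPIO.LOW)
    while GPIO.input(echo) == 0:
        start = time.time()
    while GPIO.input(echo) == 1:
        end = time.time()
    return (end - start) * 17150

# In the full script, the block below runs inside the main loop
left = measure_distance(4, 17)
time.sleep(0.01)                      # settle so the two pings do not interfere
right = measure_distance(23, 24)

if left < 8 and val > -0.95:          # hand within 8 cm of the left sensor
    val -= 0.05                       # software stop keeps the command in range
    servo.value = val
if right < 8 and val < 0.95:          # hand within 8 cm of the right sensor
    val += 0.05
    servo.value = val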


Electrical

Here, we discuss our electrical implementation of motion control. The components of the motion control system are a Raspberry Pi 4, two Adafruit ultrasonic sensors, and the MG995 servo introduced in the previous section. A circuit diagram illustrating the system is shown below in Figure 8. Note that in addition to the motion control circuit shown, the display screen is connected to the power and ground shown. GPIOs 4 and 17 are reserved for the left sensor's trigger and echo pins, GPIOs 23 and 24 for the right sensor's, and GPIO 13 is reserved for the servo.


circuit1



Figure 8: Circuit diagram for motion control

To execute motion control, each ultrasonic sensor is associated with either left (clockwise rotation) or right (counter-clockwise rotation). To rotate the screen in the desired direction, the user need only place their hand within 8 cm of the corresponding ultrasonic sensor mounted at the base of the device. A 40-pin breakout cable connects the Raspberry Pi 4 to the breadboard. Figure 9 provides a visual of the setup, and for a live demonstration, please refer to the Demo section at the top of this page.


electrical

electrical2


Figure 9: Hardware setup for motion control

To complete the electrical assembly, we built a two-part electrical enclosure that simultaneously covers the circuitry and mounts both sensors. The sensors are positioned on either side of the base. The Raspberry Pi 4 and breadboard are positioned together within this enclosure to conserve space and keep the system's center of mass close to the base of the device. Figure 10 displays the electrical enclosure.

back
front

Figure 10: Electrical enclosure

Calibration

The distance calculations for both ultrasonic sensors, as well as the communication between the sensors and the servo, are timed with several delays that were adjusted until a comfortable and smooth rotation rate was achieved for the user. We don't want the display to rotate too fast or too slow, and we want to avoid as much jitter as possible. The rotation speed and smoothness were tuned iteratively. The rotation speed is adjusted by refining the increment added to/subtracted from the servo value; a value of 0.05 was chosen as a comfortable value. The jitter was reduced by refining the delay placed before the sensors are fired: a value that is too large results in a shaky rotation rate, whereas a value that is too small results in a high computation power requirement because it increases the polling rate. In addition to these calibration knobs, we must also calibrate a few other time delays. In the event that the user triggers both directions at the same time, the script "fires" each ultrasonic sensor in succession with a delay in between so there is no interference; this delay value is minimized such that there is still no interference. Lastly, two more minimized time delays are placed after the servo adjustments to allow the servo to react.
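
In code, these calibration knobs reduce to a handful of constants. The names below are illustrative; the values are the ones we converged on and match the Code Appendix.

# Calibration constants converged on during testing (names are illustrative)
SERVO_STEP = 0.05         # change applied to the servo value per trigger (sets rotation speed)
TRIGGER_DISTANCE = 8      # cm; a hand closer than this triggers a rotation
SETTLE_DELAY = 0.01       # s; pause before firing the sensors (trades jitter against polling load)
TRIGGER_PULSE = 0.00001   # s; width of the trigger pulse sent to each sensor
SERVO_DELAY = 0.1         # s; pause after each servo adjustment so the servo can react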

Testing

The motion control system was initially tested with a single sensor and one direction of rotation. The original motion_control.py script (later combined with voice_control.py) prints the distance calculations for troubleshooting. During testing, we adjusted the sensor trigger distance as well as the rotation speed and jitter (discussed above) until we were content with the user experience. Implementing a second sensor largely amounted to duplicating the code, although, as mentioned previously, we did need to take precautions to avoid sensor interference. Videos of the single-sensor test and double-sensor test are shown below. Note that we loaded the turntable with the mount and display in order to check that the servo provides sufficient torque, the rotation rate is comfortable, and the maximum rotation angles are even and sufficiently large. The rotation rate was deemed sufficiently smooth and reactive. We were unable to completely eliminate jitter during rotation and idling; however, combining motion control with voice control such that the motion control logic is only executed when the user needs it dramatically reduced the idling jitter.


Single Sensor Test

Double Sensor Test


Problem and Solution

When purchasing the MG995 servo, we selected the metal gear version to ensure our gear train would hold up under high-torque cycles. We did not realize that the gear train was actually a mix of metal and plastic gears. After a month of testing motion control, one of the gears was worn completely down and the gear train no longer engaged with the output gear (rendering the servo useless). A picture of the worn-down gear is shown in Figure 11.


gear

Figure 11: Worn out gear

To improve the lifetime of our motion control system, we implemented a new shutdown protocol. Originally, when motion control ended, the servo simply stopped wherever it was. As a result, when the servo started up again, it would quickly rush to the initialized position (servo value of 0). This fast motion requires significant torque and is likely the cause of significant wear. Our solution is fairly simple. We implement a button interrupt (active low) connected to GPIO 16. To end motion control, the user presses this button. When pressed, the interrupt commands the servo to return to the initialized position (servo.value = 0) at the "comfortable" rotation rate defined earlier, and the script then ends (the while loop is exited). Because the servo returns to the initialized position at the end of the script, the servo does not have to quickly find the initialized position at the next startup; it is already there.

fix code

Figure 12: New exit protocol

fix code2

Figure 13: Bailout button
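
For reference, the exit logic captured in Figures 12 and 13 reduces to the following condensed sketch (the full version appears in the Code Appendix): the button's falling edge sets a flag, and the main loop then ramps the servo back to its initialized position before exiting.

# Condensed exit protocol: button press -> ramp servo back to zero -> exit
import time
import RPi.GPIO as GPIO
from gpiozero import Servo

back = False

def GPIO16_callback(channel):
    # Active-low bail-out button: pressing it pulls GPIO 16 to ground
    global back
    back = True

GPIO.setmode(GPIO.BCM)
GPIO.setup(16, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(16, GPIO.FALLING, callback=GPIO16_callback, bouncetime=300)

servo = Servo(13)
val = 0
servo.value = val

while not back:
    time.sleep(0.1)        # the normal voice/motion logic (which moves val) runs here

# Ramp back to the initialized position at the "comfortable" rate before exiting
while val < -0.01:
    val += 0.05
    servo.value = val
    time.sleep(0.1)
while val > 0.01:
    val -= 0.05
    servo.value = val
    time.sleep(0.1)
servo.value = 0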


Voice Recognition

Overview

The next step in addressing the “contactless” requirement of our device is to implement voice recognition. Again, we are helping our friends avoid the annoying, repetitive process of having to wash and dry their hands every time they wish to reorient their screen, swipe through a recipe, or click through a video. Motion control allows for contactless display re-orientation. As discussed in this section, voice recognition allows for contactless UI navigation.

System

Here we describe the voice control system. The system works as follows. First, the voice recognition module captures and analyzes voice commands. This process runs continuously, recording 3-second snippets of audio through the USB microphone. Each snippet is saved as a WAV file and sent to the Google Speech engine, which recognizes any English words in the recording. Once a word is found, it is forwarded to the checker function to determine whether the word is in the keyword list.

The module contains three files: record.py, speech.py, and voice.py. record.py uses the PyAudio library to open the microphone, record an audio snippet, and save it to the local disk. speech.py, using the SpeechRecognition library, uploads this audio snippet to the Google Speech engine and obtains the analyzed result. voice.py serves as the main executable script when the module is run. The main process runs in a continuous loop, repeatedly recognizing voice commands and checking them against the keyword list. If the MOTION command is detected, voice recognition pauses for a predetermined amount of time (we have set this to 10 seconds) and switches to the motion control logic; once the 10 seconds have passed, it switches back to the voice recognition process. If the QUIT command is detected, the script breaks from the loop and executes the shutdown protocol described in the Motion Control section of this report. Please refer to the schematic below for a flow diagram illustrating the combined voice and motion control system.


voice system



Figure 14: Flow diagram for voice control + motion control.
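
record.py and speech.py are not reproduced in the Code Appendix, so the sketch below is only an approximation of what they contain, built on the PyAudio and SpeechRecognition libraries; the filename and exact recording parameters are illustrative.

# Approximate sketch of the record/recognize pipeline (record.py and speech.py are not
# included in the Code Appendix; filename and parameters here are illustrative)
import wave
import pyaudio
import speech_recognition as sr

def record(filename="snippet.wav", seconds=3, rate=44100, chunk=1024):
    # Record a 3-second snippet from the USB microphone and save it as a WAV file
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=rate,
                     input=True, frames_per_buffer=chunk)
    frames = [stream.read(chunk) for _ in range(int(rate / chunk * seconds))]
    width = pa.get_sample_size(pyaudio.paInt16)
    stream.stop_stream()
    stream.close()
    pa.terminate()
    with wave.open(filename, 'wb') as wf:
        wf.setnchannels(1)
        wf.setsampwidth(width)
        wf.setframerate(rate)
        wf.writeframes(b''.join(frames))
    return filename

def recognize(filename):
    # Send the snippet to the Google Speech engine and return the recognized text
    rec = sr.Recognizer()
    with sr.AudioFile(filename) as source:
        audio = rec.record(source)
    try:
        return rec.recognize_google(audio)
    except sr.UnknownValueError:
        return ""          # nothing intelligible was heard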


Electrical

The USB mini-microphone is connected directly to a USB port on the Raspberry Pi and is used for speech recognition of voice commands. To indicate that the microphone is recording, we implement a green feedback LED, connected in series with a 220 ohm resistor to GPIO 21. Additionally, we have a physical push button connected to GPIO 16 with a pull-up resistor. This bail-out button is used to manually quit the voice/motion control.


circuit2



Figure 15: Circuit diagram for voice control + motion control.
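
A minimal wiring check for the feedback LED and bail-out button described above (not part of the final scripts; the pin assignments match the circuit diagram) turns the LED on whenever the button is held:

# Minimal wiring check for the feedback LED (GPIO 21) and bail-out button (GPIO 16)
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(21, GPIO.OUT)                             # green LED in series with 220 ohm resistor
GPIO.setup(16, GPIO.IN, pull_up_down=GPIO.PUD_UP)    # push button, active low

try:
    while True:
        pressed = GPIO.input(16) == GPIO.LOW         # button press pulls the pin to ground
        GPIO.output(21, GPIO.HIGH if pressed else GPIO.LOW)
        time.sleep(0.05)
except KeyboardInterrupt:
    GPIO.cleanup()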

Testing

Testing voice recognition is simple. We add print statements at each point in the voice recognition process. When testing, we first confirm that the LED lights up when the "recording" statement appears. Then we speak a word and confirm that the printed output matches what was said. When using voice recognition with our masks on, we discovered that the microphone often misheard what was spoken. For instance, "steps" was heard as "step". To address this issue, we added multiple keyword options (based on testing results) to a dictionary for each command.
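
The aliasing is implemented as a dictionary that maps each command to the strings the speech engine was observed to return for it. Condensed from the Code Appendix:

# Condensed keyword-aliasing scheme from the Code Appendix: each command maps to the
# strings the speech engine returned for it during (masked) testing
COMMANDS = {
    'steps':  ['STEP', 'STEPS', 'NEXT'],
    'video':  ['VIDEO', 'VIDEOS'],
    'motion': ['MOTION', 'MOTIONS', 'NOTION'],
}

def check(cmd):
    # Return the canonical command for a recognized word, or None if it is not a keyword
    for key, aliases in COMMANDS.items():
        if cmd.upper() in aliases:
            return key
    return None

print(check("step"))     # -> 'steps'
print(check("notion"))   # -> 'motion' (a common mishearing of "motion")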



User Interface

Overview

With motion control and voice recognition implemented, we begin designing the user interface, which is displayed on the 7” touch screen display. Our UI is intentionally kept simple, primarily because we intend for this design project to showcase our mechanical and electrical technical skills rather than our UX skills. The UI has four levels: title, menu, ingredients, and steps, as well as three added features: a timer and video for each recipe and a global clock. The UI is implemented by adapting the PyGame code from GREY.py, a Python 3 2-D game. This base code was appealing since it already set up multiple UI levels for the user to navigate through. Much of our UI was created by moving the locations of buttons, changing text, and inserting our own photos. Our most significant contributions to the base code are the global clock, timers, and videos. The background images were designed using Canva, and stock videos were taken from Pexels.

System

Here we describe the user interface system. The UI system is built with PyGame and is displayed on a 7" touchscreen for user interaction. Initially, there are three splash screens. The first shows the "Cookbook" logo itself; after 2 seconds it turns to the second splash screen automatically. The second shows the "Cookbook" text plus the logo; after another 2 seconds, it turns to the third splash screen automatically. The third adds a start notice. On the third splash screen, if the user taps the screen or speaks "start", the UI jumps into the main menu; otherwise, it remains on the third splash screen.

In the main menu, there are five food buttons at the bottom, a real-time clock in the top-right corner, and a quit button on the right side. The quit button allows the user to exit the UI system, and the user can also speak "quit" to trigger it. Clicking one of the food buttons, or speaking the food name, jumps into the ingredients screen, which shows the ingredients for the corresponding food with a food picture underneath. A "Show Steps" button allows the user to jump into the next level, which shows each step of the recipe. On the right-hand side there are two buttons: "Video", which plays a short clip showing step-by-step instructions or a relevant cooking technique, and "Start Timer", which starts a countdown timer. Once the timer starts, the button changes to "Reset Timer", which allows the user to reset the timer. Please refer to the schematic below for a flow diagram illustrating the system.


ui system



Figure 16: Flow diagram for user interface
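
Navigation between these screens is handled with a simple function-pointer pattern, condensed here from Cookbook.py in the Code Appendix: each level is a function that runs its own event loop and, before returning, sets the global menu_level to the next level to run. The stand-in bodies below replace the PyGame drawing code.

# Condensed level-dispatch pattern from Cookbook.py (stand-in bodies replace the PyGame code)
import time

menu_level = None

def op():
    # Splash screens: after the timed sequence, hand control to the main menu
    global menu_level
    time.sleep(2)               # stand-in for the three timed splash images
    menu_level = menu

def menu():
    # Main menu: a button press or voice command selects the next level
    global menu_level
    menu_level = quit_cookbook  # stand-in for Spaghetti, Fish, Chicken, Soup, Sushi

def quit_cookbook():
    raise SystemExit

def run():
    global menu_level
    menu_level = op
    while True:
        menu_level()            # repeatedly call whichever level is currently active

if __name__ == "__main__":
    run()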




Level 1: Title

The title screen is the most carefully designed aesthetic element of our device. It is intended to advertise our brand while grabbing the user’s attention. The Title level (actually three levels) is an attractive animation. To navigate to the Menu, the user can either speak “start” or touch the start button.


title

Figure 17: Title page animation

Level 2: Menu

Considering that the objective of this project is to develop a contactless cookbook for college students, our menu includes a small set of simple, popular recipes. In fact, the recipes, cited at the end of this report, were recommended by our classmates. To navigate from the menu, the user can either speak the text written on the corresponding button or utilize the touch screen.

menu

Figure 18: Menu page with recipe options

Level 3: Ingredients

The final two levels mirror the structure of most physical and digital recipes. In the Ingredients level, the ingredients for the recipe (selected from Menu) are displayed at the center of the screen. The user may choose to stay on this page until all ingredients have been gathered, or they can navigate back to this page at any time from the next level - Steps.

ingredients

Figure 19: Ingredients page

Level 4: Steps

This final level is a hub of information for each recipe. At the center of the screen, cooking instructions are listed. In speaking with potential users, we discovered that college students often refer to YouTube videos for additional information (e.g., how to mix the egg into the pasta without allowing it to clump together). The embedded videos are a response to this user feedback. Rather than fumbling around with a personal device to find a YouTube video, the user need only say “video” or tap the button to view a cooking video related to the recipe. We also include an embedded timer that the user can activate in a similar way. The advantage of this voice-activated timer compared to that of the Amazon Alexa is that this timer is preset for the recipe steps. The timer can be reset after it has been activated.

steps1b.png
steps2b.png

Figure 20: Steps page with embedded timer and video
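
The embedded timer is frame-based rather than clock-based: the preset duration (90 seconds in the code) counts down by one second for every 60 rendered frames. Condensed from the steps functions in the Code Appendix:

# Frame-based countdown used on the Steps pages (condensed from the Code Appendix)
frame_rate = 60      # frames per second enforced by pygame's clock.tick()
start_time = 90      # preset countdown for this recipe step, in seconds
frame_count = 0

for _ in range(3 * frame_rate):                        # simulate three seconds of frames
    total_seconds = max(start_time - frame_count // frame_rate, 0)
    minutes, seconds = divmod(total_seconds, 60)
    label = "Time left: {0:02}:{1:02}".format(minutes, seconds)
    frame_count += 1

print(label)    # -> "Time left: 01:28" after three seconds of frames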

Testing

Testing for the UI was focused on the user experience. We wanted the layout to feel intuitive, the text to be readable, and the recipes to be attractive. To test, we showed the Contactless Cookbook to classmates and teaching staff and observed how they interacted with the device. Through iterations, we increased the size of the text and repositioned buttons. One finding during testing was a screen flicker that occurs at the same frequency as the motor. The source of this issue is that our servo draws from the Raspberry Pi's 5V power source. We can easily address this issue by having the servo draw power from an external power supply (battery pack).


ingredients



Figure 21: Circuit diagram for UI + voice recognition + motion control


Future Work

Although we are content with having executed the major objectives of this project, there are quite a few improvements and additional features that we would like to add to this project in the future. Since the ideation phase of this project, we have been intently focused on making the product useful and intuitive for the user. Here, at the end of the semester, we review our progress and consider what the next steps would be to make our product even better for them.

The motion control feature has several areas of improvement that were discovered during integration with voice recognition and the user interface. As discussed previously, a simple improvement is to have the servo draw power from a battery pack rather than the Raspberry Pi; this solves the display screen flicker. Another change we would want to make is using hardware PWM. When running motion control in isolation, the servo rotation is sufficiently smooth; however, when integrated into the full system, motor jitter grows significantly. By offloading the pulse generation to dedicated hardware, the servo signal would no longer need to compete for processing power. In tandem with this change, we would also want to convert our increment/decrement method to PID control. Again, the rotation is reasonably smooth, but implementing PID would be a quick adjustment that would greatly reduce jitter and improve the life of the servo. As an added benefit, reducing jitter also reduces the amount of noise that needs to be filtered out during voice recognition - noise that forces the user to speak louder, closer, and clearer.

One last note is made about multiprocessing strategies. To minimize motor jitter, our first technique was to reserve one of the four cores for a motion_control.py script (in hindsight, we should have attempted hardware PWM and PID control first). When we did this, there were conflicts between GPIOs being set in different scripts (voice.py, motion_control.py, cookbook.py) at the same time, which resulted in motion_control.py completely shutting down. In the future, we might also consider resolving these conflicts - if not for improving motion control, then for improving voice control and UI performance.
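
If we pursued the hardware PWM route mentioned above, the smallest change would likely be to generate the servo pulses through the pigpio daemon rather than in Python. A sketch of that swap (not implemented in this project) is shown below; it requires the pigpiod daemon to be running.

# Sketch of the hardware-timed PWM change discussed above (not implemented in this project).
# gpiozero can route the servo signal through the pigpio daemon, which times the pulses
# with DMA instead of Python code, removing jitter caused by CPU contention.
from gpiozero import Servo
from gpiozero.pins.pigpio import PiGPIOFactory   # requires the pigpiod daemon to be running

factory = PiGPIOFactory()
servo = Servo(13, pin_factory=factory)           # same GPIO 13 signal pin as the current design
servo.value = 0                                  # the increment/decrement commands are unchanged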

The voice recognition feature has a lot of room for improvement; however, this improvement is less about work on our end and more about purchasing a high quality microphone (or several) and a subscription to a high quality voice recognition service. The quality of our current voice recognition feature is not surprising considering we were using an open source voice recognition resource and a cheap microphone. Ideally, with these hardware and software enhancements, the user would not have to wait for an LED to flash to speak a command, and the voice commands could be heard from anywhere in the user's loud kitchen.

Perhaps the most exciting aspect of this project is how much the UI can be expanded. In our initial proposal for the Contactless Cookbook, we generated a long list of UI extensions that could be pursued as time allowed. For instance, the cookbook database could include different modes such as "vegetarian mode" or "date-night mode" which would suggest a unique set of recipes to choose from. Other features could be added, such as allowing the user to upload their own favorite recipes or download the ingredients page as a grocery list. In addition to the endless possibilities of potential add-ons, there are some simple improvements that we would make with additional time. Our first step would be to adjust the timers such that they actually follow along with the recipe; at present, our timer is more a proof-of-concept than a useful feature. Additionally, we would like to replace the stock videos with instructional videos that provide helpful insights into executing the recipe. We would film these videos ourselves to avoid copyright infringement. Lastly, we considered breaking up the steps into individual levels that the user can navigate through; however, with the current voice recognition system, this would be too time consuming for the user. With a quicker voice recognition system, breaking up the steps would allow us to make the instructions more of a journey than a cheat-sheet. With more space, we could provide pictures, videos, and timers for each individual instruction.


Budget

Item Name Price Quantity
MG995 Servo $8.50 1
Servo Horn $1.12 1
7" Touchscreen Display $60 1
Ultrasonic Sensor (HC-SR04) $1.40 2
Dashboard Mount $15 1
Stainless Steel Bearing Balls $4.75 1 (8 bearings)
3D Printing Filament $20 1/4 roll
Desk Lamp $0 1
Microphone $0 1
Total $97.17


Work Distribution

Alex

Alex LoCicero
Mechanical Design, Motion Control, and UI

Yingjie

Yingjie Zhao
Voice Recognition and UI





Code Appendix



Cookbook.py

import pygame
import sys
import time
import os

# Command list
COMMANDS = ['start', 'back', 'steps', 'video', 'timer', 'reset', 'spaghetti', 'fish', 'chicken', 'soup', 'sushi', 'pause', 'stop', 'menu']

pygame.init()

# Size
screen_w=800
screen_h=480

# Colors
black=(0,0,0)
white=(255,255,255)
red=(255,0,0)
blue=(0,0,255)

# Environment
screen=pygame.display.set_mode([screen_w,screen_h])
icon = pygame.image.load('./images/icon.jpg')
pygame.display.set_icon(icon)
pygame.display.set_caption('Cookbook')
clock=pygame.time.Clock()

arial_25 = pygame.font.SysFont('arial',25)
clock_font = pygame.font.SysFont('DejaVu Sans Mono',26)

#Loading Sequence
op1 = pygame.image.load('./images/Loading.png').convert_alpha()
op1 = pygame.transform.scale(op1,(screen_w,screen_h))
op2 = pygame.image.load('./images/Home.png').convert_alpha()
op2 = pygame.transform.scale(op2,(screen_w,screen_h))
op3 = pygame.image.load('./images/start.png').convert_alpha()
op3 = pygame.transform.scale(op3,(screen_w,screen_h))

menu_level = None

# Check voice command
def check(cmd):
    if cmd in COMMANDS:
        return cmd
    return None

# Voice detection
def voice_detect():
    f = open("test.txt", mode='r+', encoding='utf-8')
    command = f.read()
    found = check(command)
    if found is not None:
        print("[main] Command found: " + found)
        f.truncate(0)
        f.close()
        return found
    else:
        f.close()
        return None

# Button get pressed
def button(msg,x,y,w,h,ap,ic,ac):
    valid = False
    mouse = pygame.mouse.get_pos()
    click = pygame.mouse.get_pressed()
    if x+w>mouse[0]>x and y+h>mouse[1]>y:
        pygame.draw.rect(screen,ac,(x-ap,y-ap,w+2*ap,h+2*ap))
        if click[0] == 1:
            valid = True
            time.sleep(0.1)
            
    else:
        pygame.draw.rect(screen,ic,(x,y,w,h))
    txt = arial_25.render(msg,True,black)
    txt_rect = txt.get_rect()
    txt_rect.center = ((x+w/2),(y+h/2))
    screen.blit(txt,txt_rect)
    return valid

# ----- Spaghetti -----
def Spaghetti():
    intro = True
    spaghettiimg1 = pygame.image.load('./images/spaghetti1.jpg').convert()# spaghetti picture
    spaghettiimg2 = pygame.image.load('./images/spaghetti2.jpg').convert()# ingredients list
    global menu_level
    start = pygame.time.get_ticks()

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        # Display current time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        # Show ingredients
        if pygame.time.get_ticks()-start<2000:# For 2 seconds, display the spaghetti picture
            screen.blit(spaghettiimg1,(0,0))
        else: #After 2 seconds have passed, display the ingredients list
            screen.blit(spaghettiimg2,(0,0))

        screen.blit(timeText, (670,0))
        
        # Check voice command from external txt file
        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        if button('Show Steps',340,440,130,35,3,(242,242,242),(255,255,153)) or voice == "steps":
            intro = False
            menu_level = Spaghetti_steps# Move to steps
        if button('Menu',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "menu":
            intro = False
            menu_level = menu# Move to menu
        ###

        pygame.display.update()
        clock.tick(60)

# Each of the recipes that follow are formatted the same way. All that changes are pictures and videos.
def Spaghetti_steps():
    intro = True
    spaghettiimg = pygame.image.load('./images/spaghetti3.jpg').convert()
    global menu_level
    timer = pygame.time.Clock()
    timer_running = False
    frame_count = 0
    frame_rate = 60
    start_time = 90

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        # Display current local time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        screen.blit(spaghettiimg,(0,0))
        screen.blit(timeText, (670,0))

        # Check voice command from external txt file
        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        # Display Timer Buttons 
        if(timer_running):
            if button('Reset Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "reset": # Timer
                timer_running = False
        else:
            if button('Start Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "timer": # Timer
                timer_running = True

        # Timer countsdown
        if(timer_running):
            total_seconds = start_time - (frame_count // frame_rate)
            if total_seconds < 0:
                total_seconds = 0
            minutes = total_seconds // 60
            seconds = total_seconds % 60
            output_string = "Time left: {0:02}:{1:02}".format(minutes, seconds)
            mytimer = clock_font.render(output_string, True, (0,0,0))
            frame_count += 1
            screen.blit(mytimer, (250,0))
            timer.tick(frame_rate)
        else:
            frame_count = 0
            start_time = 90

        # Video button
        if button('Video',670,90,130,35,3,(242,242,242),(255,255,153)) or voice == "video": # Video
            os.system('mplayer -fs -x 800 -y 480 ./videos/spaghetti.mp4')
        # Back button
        if button('Back',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "back": # Back
            intro = False
            menu_level = Spaghetti

        pygame.display.update()
        clock.tick(60)
    
# ----- Fish -----

def Fish():
    intro = True
    fishimg1 = pygame.image.load('./images/fish1.jpg').convert()
    fishimg2 = pygame.image.load('./images/fish2.jpg').convert()
    global menu_level
    start = pygame.time.get_ticks()

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        
        if pygame.time.get_ticks()-start<2000:
            screen.blit(fishimg1,(0,0))
        else:
            screen.blit(fishimg2,(0,0))

        screen.blit(timeText, (670,0))

        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        if button('Show Steps',340,440,130,35,3,(242,242,242),(255,255,153)) or voice == "steps":
            intro = False
            menu_level = Fish_steps
        if button('Menu',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "menu":
            intro = False
            menu_level = menu
        ###

        pygame.display.update()
        clock.tick(60)

def Fish_steps():
    intro = True
    spaghettiimg = pygame.image.load('./images/fish3.jpg').convert()
    global menu_level
    timer = pygame.time.Clock()
    timer_running = False
    frame_count = 0
    frame_rate = 60
    start_time = 90

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        # Display current local time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        screen.blit(spaghettiimg,(0,0))
        screen.blit(timeText, (670,0))

        voice = voice_detect()
        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        # Display Timer Buttons 
        if(timer_running):
            if button('Reset Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "reset": # Timer
                timer_running = False
        else:
            if button('Start Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "timer": # Timer
                timer_running = True

        # Timer countsdown
        if(timer_running):
            total_seconds = start_time - (frame_count // frame_rate)
            if total_seconds < 0:
                total_seconds = 0
            minutes = total_seconds // 60
            seconds = total_seconds % 60
            output_string = "Time left: {0:02}:{1:02}".format(minutes, seconds)
            mytimer = clock_font.render(output_string, True, (0,0,0))
            frame_count += 1
            screen.blit(mytimer, (250,0))
            timer.tick(frame_rate)
        else:
            frame_count = 0
            frame_rate = 60
            start_time = 90

        # Video button
        if button('Video',670,90,130,35,3,(242,242,242),(255,255,153)) or voice == "video": # Video
            os.system('mplayer -fs -x 800 -y 480 ./videos/fish.mp4')
        # Back button
        if button('Back',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "back": # Back
            intro = False
            menu_level = Fish

        pygame.display.update()
        clock.tick(60)

# ----- Chicken -----
def Chicken():
    intro = True
    chickenimg1 = pygame.image.load('./images/chicken1.jpg').convert()
    chickenimg2 = pygame.image.load('./images/chicken2.jpg').convert()
    global menu_level
    start = pygame.time.get_ticks()

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        
        if pygame.time.get_ticks()-start<2000:
            screen.blit(chickenimg1,(0,0))
        else:
            screen.blit(chickenimg2,(0,0))

        screen.blit(timeText, (670,0))

        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        if button('Show Steps',340,440,130,35,3,(242,242,242),(255,255,153)) or voice == "steps":
            intro = False
            menu_level = Chicken_steps
        if button('Menu',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "menu":
            intro = False
            menu_level = menu
        ###

        pygame.display.update()
        clock.tick(60)

def Chicken_steps():
    intro = True
    spaghettiimg = pygame.image.load('./images/chicken3.jpg').convert()
    global menu_level
    timer = pygame.time.Clock()
    timer_running = False
    frame_count = 0
    frame_rate = 60
    start_time = 90

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        # Display current local time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        screen.blit(spaghettiimg,(0,0))
        screen.blit(timeText, (670,0))

        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        # Display Timer Buttons 
        if(timer_running):
            if button('Reset Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "reset": # Timer
                timer_running = False
        else:
            if button('Start Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "timer": # Timer
                timer_running = True

        # Timer countsdown
        if(timer_running):
            total_seconds = start_time - (frame_count // frame_rate)
            if total_seconds < 0:
                total_seconds = 0
            minutes = total_seconds // 60
            seconds = total_seconds % 60
            output_string = "Time left: {0:02}:{1:02}".format(minutes, seconds)
            mytimer = clock_font.render(output_string, True, (0,0,0))
            frame_count += 1
            screen.blit(mytimer, (250,0))
            timer.tick(frame_rate)
        else:
            frame_count = 0
            frame_rate = 60
            start_time = 90

        # Video button
        if button('Video',670,90,130,35,3,(242,242,242),(255,255,153)) or voice == "video": # Video
            os.system('mplayer -fs -x 800 -y 480 ./videos/chicken.mp4')
        # Back button
        if button('Back',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "back": # Back
            intro = False
            menu_level = Chicken

        pygame.display.update()
        clock.tick(60)

# ----- Soup -----
def Soup():
    intro = True
    soupimg1 = pygame.image.load('./images/soup1.jpg').convert()
    soupimg2 = pygame.image.load('./images/soup2.jpg').convert()
    global menu_level
    start = pygame.time.get_ticks()

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        
        if pygame.time.get_ticks()-start<2000:
            screen.blit(soupimg1,(0,0))
        else:
            screen.blit(soupimg2,(0,0))

        screen.blit(timeText, (670,0))

        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        if button('Show Steps',340,440,130,35,3,(242,242,242),(255,255,153)) or voice == "steps":
            intro = False
            menu_level = Soup_steps
        if button('Menu',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "menu":
            intro = False
            menu_level = menu
        ###

        pygame.display.update()
        clock.tick(60)

def Soup_steps():
    intro = True
    spaghettiimg = pygame.image.load('./images/soup3.jpg').convert()
    global menu_level
    timer = pygame.time.Clock()
    timer_running = False
    frame_count = 0
    frame_rate = 60
    start_time = 90

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        # Display current local time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        screen.blit(spaghettiimg,(0,0))
        screen.blit(timeText, (670,0))

        voice = voice_detect()
        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        # Display Timer Buttons 
        if(timer_running):
            if button('Reset Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "reset": # Timer
                timer_running = False
        else:
            if button('Start Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "timer": # Timer
                timer_running = True

        # Timer countsdown
        if(timer_running):
            total_seconds = start_time - (frame_count // frame_rate)
            if total_seconds < 0:
                total_seconds = 0
            minutes = total_seconds // 60
            seconds = total_seconds % 60
            output_string = "Time left: {0:02}:{1:02}".format(minutes, seconds)
            mytimer = clock_font.render(output_string, True, (0,0,0))
            frame_count += 1
            screen.blit(mytimer, (250,0))
            timer.tick(frame_rate)
        else:
            frame_count = 0
            frame_rate = 60
            start_time = 90

        # Video button
        if button('Video',670,90,130,35,3,(242,242,242),(255,255,153)) or voice == "video": # Video
            os.system('mplayer -fs -x 800 -y 480 ./videos/soup.mp4')
        # Back button
        if button('Back',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "back": # Back
            intro = False
            menu_level = Soup

        pygame.display.update()
        clock.tick(60)

# ----- Sushi -----
def Sushi():
    intro = True
    sushiimg1 = pygame.image.load('./images/sushi1.jpg').convert()
    sushiimg2 = pygame.image.load('./images/sushi2.jpg').convert()
    global menu_level
    start = pygame.time.get_ticks()

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        
        if pygame.time.get_ticks()-start<2000:
            screen.blit(sushiimg1,(0,0))
        else:
            screen.blit(sushiimg2,(0,0))

        screen.blit(timeText, (670,0))

        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        if button('Show Steps',340,440,130,35,3,(242,242,242),(255,255,153)) or voice == "steps":
            intro = False
            menu_level = Sushi_steps
        if button('Menu',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "menu":
            intro = False
            menu_level = menu
        ###

        pygame.display.update()
        clock.tick(60)

def Sushi_steps():
    intro = True
    spaghettiimg = pygame.image.load('./images/sushi3.jpg').convert()
    global menu_level
    timer = pygame.time.Clock()
    timer_running = False
    frame_count = 0
    frame_rate = 60
    start_time = 90

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()

        # Display current local time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        screen.blit(spaghettiimg,(0,0))
        screen.blit(timeText, (670,0))

        voice = voice_detect()

        # buttons (msg,x,y,w,h,ap, color-before-hover, color-during-hover,action)
        # Display Timer Buttons 
        if(timer_running):
            if button('Reset Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "reset": # Timer
                timer_running = False
        else:
            if button('Start Timer',670,40,130,35,3,(242,242,242),(255,255,153)) or voice == "timer": # Timer
                timer_running = True

        # Timer countsdown
        if(timer_running):
            total_seconds = start_time - (frame_count // frame_rate)
            if total_seconds < 0:
                total_seconds = 0
            minutes = total_seconds // 60
            seconds = total_seconds % 60
            output_string = "Time left: {0:02}:{1:02}".format(minutes, seconds)
            mytimer = clock_font.render(output_string, True, (0,0,0))
            frame_count += 1
            screen.blit(mytimer, (250,0))
            timer.tick(frame_rate)
        else:
            frame_count = 0
            frame_rate = 60
            start_time = 90

        # Video button
        if button('Video',670,90,130,35,3,(242,242,242),(255,255,153)) or voice == "video": # Video
            os.system('mplayer -fs -x 800 -y 480 ./videos/sushi.mp4')
        # Back button
        if button('Back',0,0,100,40,3,(242,242,242),(255,255,153)) or voice == "back": # Back
            intro = False
            menu_level = Sushi

        pygame.display.update()
        clock.tick(60)

def menu():
    intro = True
    menuimg = pygame.image.load('./images/menu.jpg').convert()    
    global menu_level

    while intro:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                quit()
        
        # Display current local time
        theTime=time.strftime("%H:%M:%S", time.localtime())
        timeText=clock_font.render(str(theTime), True,(0,0,0))

        screen.blit(menuimg,(0,0))
        screen.blit(timeText, (670,0))

        # Check voice command from external txt file
        voice = voice_detect()

        # Food buttons
        if button('Spaghetti',45,430,100,40,3,(242,242,242),(255,255,153)) or voice == "spaghetti":
            intro = False
            menu_level = Spaghetti
        if button('Fish',197,430,100,40,3,(242,242,242),(255,255,153)) or voice == "fish":
            intro = False
            menu_level = Fish
        if button('Chicken',348,430,100,40,3,(242,242,242),(255,255,153)) or voice == "chicken":
            intro = False
            menu_level = Chicken
        if button('Soup',500,430,100,40,3,(242,242,242),(255,255,153)) or voice == "soup":
            intro = False
            menu_level = Soup
        if button('Sushi',652,430,100,40,3,(242,242,242),(255,255,153)) or voice == "sushi":
            intro = False
            menu_level = Sushi
        if button('X',760,40,40,40,3,(242,242,242),(255,255,153)) or voice == "quit":
            intro = False
            menu_level = quit_cookbook

        pygame.display.update()
        clock.tick(60)

def op():
    global menu_level
    start = pygame.time.get_ticks()
    
    cinematic = True
    while cinematic:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                    pygame.quit()
                    quit()
            if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE or event.type == pygame.MOUSEBUTTONDOWN:
                menu_level = menu
                cinematic = False

        # First splash screen (start time < 2secs)
        if pygame.time.get_ticks()-start<2000:
            screen.blit(op1,(0,0))
        # Second splash screen (2secs <= start time < 4secs)
        elif 2000<pygame.time.get_ticks()-start<4000:
            screen.blit(op2,(0,0))
        # Third splash screen with start button
        else:
            screen.blit(op3,(0,0))
            # Check voice command is "start" or no
            voice = voice_detect()
            # Touch detection
            if event.type == pygame.KEYDOWN or event.type == pygame.MOUSEBUTTONDOWN or voice == "start":
                menu_level = menu
                cinematic = False
        pygame.display.update()
        clock.tick(60)

# Quit
def quit_cookbook():
    pygame.quit()
    sys.exit()

# Main Entrance func.
def run():
    global menu_level
    menu_level = op
    while True:
        menu_level()

if __name__ == "__main__":
    run()

Voice_and_Motion_Control.py

import time
import speech_recognition as sr
import RPi.GPIO as GPIO
from record import RecordAudio
import speech
from gpiozero import Servo
import keyboard

freq = 50
direction = False #left
move = False
position = "mid"
inrange= False
running = True
back = False

# Support commands dict
COMMANDS = {
    'start': ['START', 'STARTS'], # Multiple command options account for audio processing mistakes
    'back': ['BACK'],
    'steps': ['STEP', 'STEPS','NEXT'],
    'video': ['VIDEO', 'VIDEOS'],
    'spaghetti': ['SPAGHETTI'],
    'fish': ['FISH'],
    'chicken': ['CHICKEN'],
    'soup': ['SOUP'],
    'sushi': ['SUSHI'],
    'menu': ['MENU'],
    'timer': ['TIMER'],
    'reset': ['RESET'],
    'pause': ['PAUSE', 'CONTINUE'],
    'stop': ['STOP'],
    'quit': ['QUIT'],
    'motion': ['MOTION', 'MOTIONS', 'NOTION']
}

# Hardware interrupt (Active low interrupt. Triggered when button is pressed connecting the pin to ground.)
def GPIO16_callback(channel):
    global back
    print ("falling edge detected on 16")
    back = True
    
    
# Record an audio and send to Google
def recognize(rec, mic):
    running = True
    result  = ""
    print("[main] Recognizing...")
    
    GPIO.output(21,GPIO.HIGH)
    mic.record()
    GPIO.output(21,GPIO.LOW)
    result = speech.recognize(mic.save())
    time.sleep(1)

    return result

# Check if the speech command is in the command dict.
def check(cmd):
    for key, val in COMMANDS.items():
        if cmd in val:
            return key
    return None

def run():
    global freq
    global direction 
    global move
    global position 
    global running
    global back 
    servo = Servo(13)
    val=0
    servo.value = val
    
    # Initialize speech recognition
    rec = sr.Recognizer()
    mic = RecordAudio()
    motion = False
    start_time = 0
    
    while running:
        if back == True: # Execute shutdown protocol
            print("button pressed")
            running = False # Before ending script, return servo to initial position (val = 0)
            while val < 0:
                print(val)
                val = val + .05
                servo.value = val
                time.sleep(.1)
            while val >0:
                print(val)
                val = val - .05
                servo.value = val
                time.sleep(0.1)
            break
        # Voice detection part
        if motion == False:
            # Introduction
            print("[main] Please speak a command")
            command = None
            while command is None: 
                if back == True:
                  break
                command = recognize(rec, mic)
            if back == True:
                  continue
            print("[Result] " + command)

            found = check(command.upper())
            if found is not None:
                print("[main] Command found: " + found)
                f = open("test.txt", mode='w', encoding='utf-8')
                f.write(found)
                f.close()
                #fifo_cmd = 'echo ' + command + ' > test_fifo'
                #subprocess.check_output(fifo_cmd, shell=True)
    
            # Start motion control
            if command.upper() == 'MOTION':
                start_time = time.time()
                motion = True
            # Stop execution if EXIT
            if command.upper() == 'QUIT':
                running = False
        # Motion control part        
        else: # User has said 'MOTION'
            if time.time() - start_time > 10:# 10 seconds for motion control operation
                motion = False   
            GPIO.output(4,GPIO.LOW)# Trigger (Left sensor)
            GPIO.output(23,GPIO.LOW)# Trigger (Right sensor)
            #print ("Waiting for sensors to settle")
            time.sleep(.01)
            #print ("Calculating distances")
           
            GPIO.output(4,GPIO.HIGH)
            time.sleep(0.00001)
            GPIO.output(4,GPIO.LOW)
            while GPIO.input(17)==0: # Echo (Left sensor)
                pulse_start_time = time.time()
            #print('bit flip')
            while GPIO.input(17)==1: # Echo recieved (left sensor)
                pulse_end_time=time.time()
            pulse_duration = pulse_end_time - pulse_start_time
            Ldistance = round(pulse_duration *17150, 2) # compute left distance
            #print ("Left Distance:",Ldistance,"cm")
            
            GPIO.output(23,GPIO.HIGH)
            time.sleep(0.00001)
            GPIO.output(23,GPIO.LOW)
            while GPIO.input(24)==0:
                pulse_start_time = time.time()
            #print('bit flip')
            while GPIO.input(24)==1:
                pulse_end_time=time.time()
            pulse_duration = pulse_end_time - pulse_start_time
            Rdistance = round(pulse_duration *17150, 2) # compute right distance
            #print ("                          Right Distance:",Rdistance,"cm")
        
            
            if Ldistance < 8:# decrement motor if left sensor activated
                if val > -0.95:# Ensure servo val does not exceed limit
                   servo.value = val
                   val = val-0.05
                   print(val)
                   time.sleep(.1)
            
            if Rdistance < 8:# increment if right sensor activated
                if val < 0.95:# Ensure servo val does not exceed limit
                   servo.value = val
                   val = val+0.05
                   print(val) 
                   time.sleep(.1)
    
            
            if keyboard.is_pressed("space"):# Execute shutdown protocol if spacebar is pressed
                running = False
                while val < 0:
                    print(val)
                    val = val + .05
                    servo.value = val
                    time.sleep(.1)
                while val >0:
                    print(val)
                    val = val - .05
                    servo.value = val
                    time.sleep(0.1)



if __name__ == "__main__":
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(21, GPIO.OUT)
    
    GPIO.setup(13, GPIO.OUT)
    GPIO.setup(4, GPIO.OUT)
    GPIO.setup(17, GPIO.IN)
    GPIO.setup(23, GPIO.OUT)
    GPIO.setup(24, GPIO.IN)
    GPIO.setup(16, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(16, GPIO.FALLING, callback=GPIO16_callback, bouncetime=300)
    try:
        run()
    except KeyboardInterrupt:
        GPIO.cleanup()